7 research outputs found

    Rodent arena tracker (RAT): A machine vision rodent tracking camera and closed loop control system

    Video tracking is an essential tool in rodent research. Here, we demonstrate a rodent tracking camera built on a low-cost, open-source machine vision camera, the OpenMV Cam M7. We call our device the rodent arena tracker (RAT): a pocket-sized, machine vision-based position tracker. The RAT does not require a tethered computer to operate and costs about $120 per device to build. These features make the RAT scalable to large installations and accessible to research institutions and educational settings where budgets may be limited. The RAT processes incoming video in real time at 15 Hz and saves…
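    As an illustration of the kind of on-camera processing such a device performs, the indented sketch below shows how an OpenMV camera can threshold grayscale frames and report the centroid of the largest dark blob (e.g., a mouse on a light arena floor) in its MicroPython environment. This is a minimal sketch under assumed conditions (dark animal, bright background); the threshold values, frame size, and blob-selection rule are illustrative and are not the published RAT firmware.

        import sensor, time

        sensor.reset()                              # initialize the camera sensor
        sensor.set_pixformat(sensor.GRAYSCALE)      # single-channel frames are enough for position tracking
        sensor.set_framesize(sensor.QVGA)           # 320x240 keeps per-frame processing cheap
        sensor.skip_frames(time=2000)               # let auto-exposure settle
        clock = time.clock()

        # Grayscale threshold (min, max) for a dark animal on a light floor; tune per arena.
        DARK_BLOB = [(0, 60)]

        while True:
            clock.tick()
            img = sensor.snapshot()
            blobs = img.find_blobs(DARK_BLOB, pixels_threshold=200, area_threshold=200, merge=True)
            if blobs:
                mouse = max(blobs, key=lambda b: b.pixels())   # assume the largest blob is the animal
                print(mouse.cx(), mouse.cy(), clock.fps())     # centroid position and processing rate

    A closed-loop system would act on the reported centroid, for example triggering an output when the animal enters a zone, rather than only printing it.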

    Accelerated and Improved Differentiation of Retinal Organoids from Pluripotent Stem Cells in Rotating-Wall Vessel Bioreactors

    Summary: Pluripotent stem cells can be differentiated into 3D retinal organoids, with major cell types self-patterning into a polarized, laminated architecture. In static cultures, organoid development may be hindered by limitations in diffusion of oxygen and nutrients. Herein, we report a bioprocess using rotating-wall vessel (RWV) bioreactors to culture retinal organoids derived from mouse pluripotent stem cells. Organoids in RWV demonstrate enhanced proliferation, with well-defined morphology and improved differentiation of neurons including ganglion cells and S-cone photoreceptors. Furthermore, RWV organoids at day 25 (D25) reveal maturation and transcriptome profiles similar to those at D32 in static culture, closely recapitulating the spatiotemporal development of postnatal day 6 mouse retina in vivo. Interestingly, however, retinal organoids do not differentiate further under any in vitro condition tested here, suggesting additional requirements for functional maturation. Our studies demonstrate that bioreactors can accelerate and improve organoid growth and differentiation for modeling retinal disease and evaluating therapies.

    Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks

    © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE). Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that aims to refine the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a mean Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, without trimming any end slices. The proposed holistic model significantly (p < 0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance as compared with other MRI prostate segmentation methods in the literature.
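    For reference, the two metrics reported above can be computed from binary segmentation masks as in the indented Python sketch below. This is a generic NumPy illustration of the standard DSC and IoU definitions, not code from the paper, and the example arrays are hypothetical.

        import numpy as np

        def dice_coefficient(pred, target, eps=1e-8):
            """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
            pred, target = pred.astype(bool), target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            return 2.0 * intersection / (pred.sum() + target.sum() + eps)

        def jaccard_index(pred, target, eps=1e-8):
            """Jaccard similarity coefficient (IoU): |A∩B| / |A∪B|."""
            pred, target = pred.astype(bool), target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            union = np.logical_or(pred, target).sum()
            return intersection / (union + eps)

        # Hypothetical predicted and reference prostate masks for one MRI volume.
        pred = np.zeros((32, 256, 256), dtype=np.uint8); pred[10:20, 100:160, 100:160] = 1
        gt = np.zeros((32, 256, 256), dtype=np.uint8); gt[11:21, 102:158, 98:162] = 1
        print("DSC = %.4f, IoU = %.4f" % (dice_coefficient(pred, gt), jaccard_index(pred, gt)))

    The two metrics are monotonically related (DSC = 2*IoU / (1 + IoU)), which is why the DSC and IoU gains over the patch-based baseline move together.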

    Fully automated prostate whole gland and central gland segmentation on MRI using holistically nested networks with short connections

    © 2019 Society of Photo-Optical Instrumentation Engineers (SPIE). Accurate and automated prostate whole gland and central gland segmentations on MR images are essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D slice-by-slice segmentation of the prostate. Those methods tend to over-segment or under-segment the prostate at the apex and base, which is a major source of error. The proposed method leverages orthogonal context to effectively reduce segmentation ambiguity at the apex and base. It also overcomes the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the whole prostate and a DSC of 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections method for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
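    The core idea of orthogonal 2-D segmentation is to run a 2-D network slice by slice along each anatomical axis and then combine the three resulting probability volumes, so that the views constrain one another at the apex and base. The indented Python sketch below illustrates one simple fusion scheme (probability averaging) under that assumption; the abstract does not state the paper's exact fusion rule, and segment_slice_2d is a hypothetical stand-in for any trained 2-D segmentation model.

        import numpy as np

        def segment_slice_2d(slice_2d):
            """Hypothetical stand-in for a trained 2-D model returning per-pixel foreground probabilities."""
            return np.clip(slice_2d.astype(np.float32) / (slice_2d.max() + 1e-8), 0.0, 1.0)

        def segment_along_axis(volume, axis):
            """Apply the 2-D model slice by slice along one axis and reassemble a probability volume."""
            moved = np.moveaxis(volume, axis, 0)                    # bring the chosen axis to the front
            probs = np.stack([segment_slice_2d(s) for s in moved])  # independent 2-D inference per slice
            return np.moveaxis(probs, 0, axis)                      # restore the original orientation

        def orthogonal_fusion(volume, threshold=0.5):
            """Average predictions from the three orthogonal slicing directions and binarize."""
            fused = np.mean([segment_along_axis(volume, ax) for ax in range(3)], axis=0)
            return (fused >= threshold).astype(np.uint8)

        # Hypothetical isotropically resampled T2-weighted volume.
        vol = np.random.rand(64, 64, 64).astype(np.float32)
        mask = orthogonal_fusion(vol)
        print(mask.shape, int(mask.sum()))

    Averaging is only one choice; majority voting or a learned fusion step could replace the mean in orthogonal_fusion without changing the slice-wise structure.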